
Temporal Weights

Kohan, Adam, Rietman, Ed, Siegelmann, Hava

arXiv.org Artificial Intelligence

In artificial neural networks, weights are a static representation of synapses. However, synapses are not static; they have their own interacting dynamics over time. To instill weights with interacting dynamics, we use a model describing synchronization that is capable of capturing core mechanisms of a range of neural and general biological phenomena over time. An ideal fit for these Temporal Weights (TW) are Neural ODEs, with continuous dynamics and a dependency on time. The resulting recurrent neural networks efficiently model temporal dynamics by computing on the ordering of sequences and on the length and scale of time. By adding temporal weights to a model, we demonstrate better performance, smaller models, and improved data efficiency on sparse, irregularly sampled time series datasets.
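The abstract does not give the paper's equations, but the core idea of weights with their own synchronization dynamics can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' model: it uses Kuramoto-style phase oscillators (a standard synchronization model) as the weight dynamics, reads out recurrent weights as `cos(theta)`, and steps them with a per-step `dt`, which is where irregular sampling intervals would enter.

```python
import numpy as np

def temporal_weight_step(theta, omega, K, dt):
    """One Euler step of Kuramoto-style phase dynamics (illustrative only).

    theta: (n,) phases, one per synapse
    omega: (n,) natural frequencies
    K:     coupling strength; larger K pushes phases toward synchrony
    dt:    step size -- irregular sampling maps to varying dt per step
    """
    n = len(theta)
    coupling = (K / n) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    return theta + dt * (omega + coupling)

def rnn_step(h, x, theta, W_in):
    """RNN update whose recurrent weights are a readout of the phase state."""
    n = h.shape[0]
    W_rec = np.cos(theta).reshape(n, n)  # time-varying "temporal weights"
    return np.tanh(W_rec @ h + W_in @ x)

rng = np.random.default_rng(0)
n_hidden, n_in = 4, 3
theta = rng.uniform(0, 2 * np.pi, n_hidden * n_hidden)
omega = rng.normal(0.0, 0.1, n_hidden * n_hidden)
W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))
h = np.zeros(n_hidden)

for _ in range(50):
    x = rng.normal(0.0, 1.0, n_in)
    theta = temporal_weight_step(theta, omega, K=1.0, dt=0.1)
    h = rnn_step(h, x, theta, W_in)
```

In the paper the continuous weight dynamics are handled by a Neural ODE solver rather than fixed Euler steps; the sketch only shows how a weight matrix can carry its own state that evolves between inputs.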


Deep Adaptive Temporal Pooling for Activity Recognition

Song, Sibo, Cheung, Ngai-Man, Chandrasekhar, Vijay, Mandal, Bappaditya

arXiv.org Artificial Intelligence

Deep neural networks have recently achieved competitive accuracy for human activity recognition. However, there is room for improvement, especially in modeling long-term temporal importance and determining the activity relevance of different temporal segments in a video. To address this problem, we propose a learnable and differentiable module: Deep Adaptive Temporal Pooling (DATP). DATP applies a self-attention mechanism to adaptively pool the classification scores of different video segments. Specifically, using frame-level features, DATP regresses the importance of different temporal segments and generates weights for them. Remarkably, DATP is trained using only the video-level label; no additional supervision beyond the video-level activity class label is needed. We conduct extensive experiments to investigate various input features and different weight models. Experimental results show that DATP can learn to assign large weights to key video segments. More importantly, DATP can improve training of the frame-level feature extractor, because relevant temporal segments receive large weights during back-propagation. Overall, we achieve state-of-the-art performance on the UCF101, HMDB51 and Kinetics datasets.
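The pooling operation described above, regressing segment importance from frame-level features and using the resulting softmax weights to pool per-segment class scores, can be sketched as follows. This is a minimal illustration under assumed shapes (`T` segments, `D`-dimensional features, `C` classes) with a single linear attention projection `w_attn`; the paper's actual weight models and attention parameterization may differ.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def datp_pool(frame_feats, seg_scores, w_attn):
    """Attention-weighted pooling of per-segment classification scores.

    frame_feats: (T, D) frame-level features, one row per temporal segment
    seg_scores:  (T, C) per-segment classification scores
    w_attn:      (D,)   learnable projection regressing segment importance
    Returns a (C,) video-level score vector.
    """
    logits = frame_feats @ w_attn   # (T,) importance logit per segment
    alpha = softmax(logits)         # attention weights, non-negative, sum to 1
    return alpha @ seg_scores       # weighted sum over segments

rng = np.random.default_rng(1)
T, D, C = 8, 16, 5
feats = rng.normal(size=(T, D))
scores = rng.normal(size=(T, C))
w = rng.normal(size=D)
video_score = datp_pool(feats, scores, w)
```

Because every step is differentiable, gradients from the video-level loss flow through `alpha` back into both `w_attn` and the frame features, which is how highly weighted segments dominate the updates to the feature extractor during back-propagation.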